Results 1 - 20 of 61
1.
Ann Surg ; 277(4): 704-711, 2023 04 01.
Article in English | MEDLINE | ID: mdl-34954752

ABSTRACT

OBJECTIVE: To gather validity evidence supporting the use and interpretation of scores from the American College of Surgeons Entering Resident Readiness Assessment (ACS ERRA) Program. SUMMARY BACKGROUND DATA: ACS ERRA is an online formative assessment program developed to assess entering surgery residents' ability to make critical clinical decisions, and includes 12 clinical areas and 20 topics identified by a national panel of surgeon educators and residency program directors. METHODS: Data from 3 national testing administrations of ACS ERRA (2018-2020) were used to gather validity evidence regarding content, response process, internal structure (reliability), relations to other variables, and consequences. RESULTS: Over the 3 administrations, 1975 surgery residents participated from 125 distinct residency programs. Overall scores [Mean = 64% (SD = 7%)] remained consistent across the 3 years (P = 0.670). There were no significant differences among resident characteristics (gender, age, international medical graduate status). The mean case discrimination index was 0.54 [SD = 0.15]. Kappa inter-rater reliability for scoring was 0.87; the overall test score reliability (G-coefficient) was 0.86 (Φ-coefficient = 0.83). Residents who completed residency readiness programs had higher ACS ERRA scores (66% versus 63%, Cohen's d = 0.23, P < 0.001). On average, 15% of decisions made (21/140 per test) involved potentially harmful actions. Variability in scores from graduating medical schools (7%) carried over twice as much weight as that from matched residency programs (3%). CONCLUSIONS: ACS ERRA scores provide valuable information to entering surgery residents and surgery program directors to aid in the development of individual and group learning plans.


Subjects
Internship and Residency, Surgeons, Humans, United States, Reproducibility of Results, Program Evaluation, Clinical Competence, Graduate Medical Education
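The abstract above reports a kappa inter-rater reliability of 0.87. As a minimal sketch of how chance-corrected agreement between two raters is computed (using made-up ratings, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same cases."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence: products of marginal proportions.
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[c] * cb[c] for c in ca.keys() | cb.keys()) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical pass/fail scores from two raters on ten case decisions.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 2))
```

Kappa discounts the agreement the raters' marginal score distributions would produce by chance, which is why it runs lower than raw percent agreement.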
2.
Acad Med ; 96(8): 1079-1080, 2021 Aug 01.
Article in English | MEDLINE | ID: mdl-36047866
3.
Ann Surg ; 272(1): 194-198, 2020 07.
Article in English | MEDLINE | ID: mdl-30870178

ABSTRACT

OBJECTIVE: To assess the readiness of entering residents for clinical responsibilities, the American College of Surgeons (ACS) Division of Education developed the "Entering Resident Readiness Assessment" (ACS-ERRA) Program. SUMMARY BACKGROUND: ACS-ERRA is an online formative assessment that uses a key features approach to measure clinical decision-making skills and focuses on cases encountered at the beginning of residency. Results can be used to develop learning plans to address areas that may need reinforcement. METHODS: A national panel of 16 content experts, 3 medical educators, and a psychometrician developed 98 short, key features cases. Each case required medical knowledge to be applied appropriately at challenging decision points during case management. Four pilot testing studies were conducted sequentially to gather validity evidence. RESULTS: Residents from programs across the United States participated in the studies (n = 58, 20, 87, 154, respectively). Results from the pilot studies enabled improvements after each pilot test. For the psychometric pilot (final pilot test), 2 parallel test forms of the ACS-ERRA were administered, each containing 40 cases, resulting in overall mean testing time of 2 hours 2 minutes (SD = 43 min). The mean test score was 61% (SD = 9%) and the G-coefficient reliability was 0.90. CONCLUSIONS: Results can be used to identify strengths and weaknesses in residents' decision-making skills and yield valuable information to create individualized learning plans. The data can also support efforts directed at the transition into residency training and inform discussions about levels of supervision. In addition, surgery program directors can use the aggregate test results to make curricular changes.


Subjects
Graduate Medical Education, Educational Measurement, General Surgery/education, Internship and Residency, Clinical Competence, Decision Making, Humans, Pilot Projects, Medical Societies, United States
4.
Med Educ ; 53(7): 710-722, 2019 07.
Article in English | MEDLINE | ID: mdl-30779204

ABSTRACT

CONTEXT: The script concordance test (SCT), designed to measure clinical reasoning in complex cases, has recently been the subject of several critical research studies. Amongst other issues, response process validity evidence remains lacking. We explored the response processes of experts on an SCT scoring panel to better understand their seemingly divergent beliefs about how new clinical data alter the suitability of proposed actions within simulated patient cases. METHODS: A total of 10 Argentine gastroenterologists who served as the expert panel on an existing SCT re-answered 15 cases 9 months after their original panel participation. They then answered questions probing their reasoning and reactions to other experts' perspectives. RESULTS: The experts sometimes noted they would not ordinarily consider the actions proposed for the cases at all (30/150 instances [20%]) or would collect additional data first (54/150 instances [36%]). Even when groups of experts agreed about how new clinical data in a case affected the suitability of a proposed action, there was often disagreement (118/133 instances [89%]) about the suitability of the proposed action before the new clinical data had been introduced. Experts reported confidence in their responses, but showed limited consistency with the responses they had given 9 months earlier (linear weighted kappa = 0.33). Qualitative analyses showed nuanced and complex reasons behind experts' responses, revealing, for example, that experts often considered the unique affordances and constraints of their varying local practice environments when responding. Experts generally found other experts' alternative responses moderately compelling (mean ± standard deviation 2.93 ± 0.80 on a 5-point scale, where 3 = moderately compelling). Experts switched their own preferred responses after seeing others' reasoning in 30 of 150 (20%) instances. 
CONCLUSIONS: Expert response processes were not consistent with the classical interpretation and use of SCT scores. However, several fruitful and justifiable alternatives for the use of SCT-like methods are proposed, such as to guide assessments for learning.


Subjects
Clinical Competence, Decision Making, Expert Testimony, Gastroenterologists/education, Surveys and Questionnaires, Argentina, Continuing Medical Education, Educational Measurement, Humans, Prospective Studies, Reproducibility of Results
5.
Acad Med ; 94(2): 259-266, 2019 02.
Article in English | MEDLINE | ID: mdl-30379661

ABSTRACT

PURPOSE: Medical educators use key features examinations (KFEs) to assess clinical decision making in many countries, but not in U.S. medical schools. The authors developed an online KFE to assess third-year medical students' decision-making abilities during internal medicine (IM) clerkships in the United States. They used Messick's unified validity framework to gather validity evidence regarding response process, internal structure, and relationship to other variables. METHOD: From February 2012 through January 2013, 759 students (at eight U.S. medical schools) had 75 minutes to complete one of four KFE forms during their IM clerkship. They also completed a survey regarding their experiences. The authors performed item analyses and generalizability studies, comparing KFE scores with prior clinical experience and National Board of Medical Examiners Subject Examination (NBME-SE) scores. RESULTS: Five hundred fifteen (67.9%) students consented to participate. Across KFE forms, mean scores ranged from 54.6% to 60.3% (standard deviation 8.4-9.6%), and Phi-coefficients ranged from 0.36 to 0.52. Adding five cases to the most reliable form would increase the Phi-coefficient to 0.59. Removing the least discriminating case from the two most reliable forms would increase the alpha coefficient to, respectively, 0.58 and 0.57. The main source of variance came from the interaction of students (nested in schools) and cases. Correlation between KFE and NBME-SE scores ranged from 0.24 to 0.47 (P < .01). CONCLUSIONS: These results provide strong evidence for response-process and relationship-to-other-variables validity and moderate internal structure validity for using a KFE to complement other assessments in U.S. IM clerkships.


Subjects
Clinical Clerkship, Clinical Competence, Internal Medicine/education, Clinical Decision-Making, Humans, Reproducibility of Results, United States
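The projection in the abstract above ("adding five cases to the most reliable form would increase the Phi-coefficient to 0.59") comes from a generalizability decision study. A simpler analogue can be sketched with the Spearman-Brown prophecy formula; the result (≈0.55) will not match the published 0.59 exactly, because the full D-study partitions variance sources that Spearman-Brown ignores:

```python
def spearman_brown(reliability, length_factor):
    """Projected reliability when test length is multiplied by length_factor."""
    return (length_factor * reliability
            / (1 + (length_factor - 1) * reliability))

# A 40-case form with reliability 0.52, extended by 5 cases (factor 45/40).
projected = spearman_brown(0.52, 45 / 40)
print(round(projected, 2))
```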
6.
Med Teach ; 40(11): 1195-1196, 2018 11.
Article in English | MEDLINE | ID: mdl-30422035
7.
Med Educ ; 52(8): 851-860, 2018 08.
Article in English | MEDLINE | ID: mdl-29574896

ABSTRACT

CONTEXT: In postgraduate medical programmes, the progressive development of autonomy places residents in situations in which they must cope with uncertainty. We explored the phenomenon of hesitation, triggered by uncertainty, in the context of the operating room in order to understand the social behaviours surrounding supervision and progressive autonomy. METHODS: Nine surgical residents and their supervising surgeons at a Canadian medical school were selected. Each resident-supervisor pair was observed during a surgical procedure and subsequently participated in separate post-observation, semi-structured interviews. Constructivist grounded theory was used to guide the collection and analysis of data. RESULTS: Three hesitation-related themes were identified: the principle of progress; the meaning of hesitation, and the judgement of competence. Supervisors and residents understood hesitation in the context of a core surgical principle we termed the 'principle of progress'. This principle reflects the supervisors' and residents' shared norm that maintaining progress throughout a surgical procedure is of utmost importance. Resident hesitation was perceived as the first indication of a disruption to this principle and was therefore interpreted by supervisors and residents alike as a sign of incompetence. This interpretation influenced the teaching-learning process during these moments when residents were working at the edge of their abilities. CONCLUSIONS: The principle of progress influences the meaning of hesitation which, in turn, shapes judgements of competence. This has important implications for teaching and learning in direct supervision settings such as surgery. Without efforts to change the perception that hesitation represents incompetence, these potential teaching-learning moments will not fully support progressive autonomy.


Subjects
Clinical Competence, General Surgery/education, Internship and Residency, Operating Rooms/standards, Uncertainty, Canada, Graduate Medical Education, Grounded Theory, Humans, Interprofessional Relations, Surgeons
8.
Acad Med ; 92(11S Association of American Medical Colleges Learn Serve Lead: Proceedings of the 56th Annual Research in Medical Education Sessions): S12-S20, 2017 11.
Article in English | MEDLINE | ID: mdl-29065018

ABSTRACT

PURPOSE: To examine validity evidence of local graduation competency examination scores from seven medical schools using shared cases and to provide rater training protocols and guidelines for scoring patient notes (PNs). METHOD: Between May and August 2016, clinical cases were developed, shared, and administered across seven medical schools (990 students participated). Raters were calibrated using training protocols, and guidelines were developed collaboratively across sites to standardize scoring. Data included scores from standardized patient encounters for history taking, physical examination, and PNs. Descriptive statistics were used to examine scores from the different assessment components. Generalizability studies (G-studies) using variance components were conducted to estimate reliability for composite scores. RESULTS: Validity evidence was collected for response process (rater perception), internal structure (variance components, reliability), relations to other variables (interassessment correlations), and consequences (composite score). Student performance varied by case and task. In the PNs, justification of differential diagnosis was the most discriminating task. G-studies showed that schools accounted for less than 1% of total variance; however, for the PNs, there were differences in scores for varying cases and tasks across schools, indicating a school effect. Composite score reliability was maximized when the PN was weighted between 30% and 40%. Raters preferred using case-specific scoring guidelines with clear point-scoring systems. CONCLUSIONS: This multisite study presents validity evidence for PN scores based on scoring rubric and case-specific scoring guidelines that offer rigor and feedback for learners. Variability in PN scores across participating sites may signal different approaches to teaching clinical reasoning among medical schools.


Subjects
Clinical Competence, Undergraduate Medical Education/methods, Patient Simulation, Medical Schools, Documentation/standards, Humans, Medical History Taking/standards, Physical Examination/standards, Reproducibility of Results
9.
Acad Med ; 91(11 Association of American Medical Colleges Learn Serve Lead: Proceedings of the 55th Annual Research in Medical Education Sessions): S24-S30, 2016 11.
Article in English | MEDLINE | ID: mdl-27779506

ABSTRACT

PURPOSE: Medical schools administer locally developed graduation competency examinations (GCEs) following the structure of the United States Medical Licensing Examination Step 2 Clinical Skills that combine standardized patient (SP)-based physical examination and the patient note (PN) to create integrated clinical encounter (ICE) scores. This study examines how different subcomponent scoring weights in a locally developed GCE affect composite score reliability and pass-fail decisions for ICE scores, contributing to internal structure and consequential validity evidence. METHOD: Data from two M4 cohorts (2014: n = 177; 2015: n = 182) were used. The reliability of SP encounter (history taking and physical examination), PN, and communication and interpersonal skills scores were estimated with generalizability studies. Composite score reliability was estimated for varying weight combinations. Faculty were surveyed for preferred weights on the SP encounter and PN scores. Composite scores based on Kane's method were compared with weighted mean scores. RESULTS: Faculty suggested weighting PNs higher (60%-70%) than the SP encounter scores (30%-40%). Statistically, composite score reliability was maximized when PN scores were weighted at 40% to 50%. Composite score reliability of ICE scores increased by up to 0.20 points when SP-history taking (SP-Hx) scores were included; excluding SP-Hx only increased composite score reliability by 0.09 points. Classification accuracy for pass-fail decisions between composite and weighted mean scores was 0.77; misclassification was < 5%. CONCLUSIONS: Medical schools and certification agencies should consider implications of assigning weights with respect to composite score reliability and consequences on pass-fail decisions.


Subjects
Clinical Competence/standards, Undergraduate Medical Education/standards, Educational Measurement/methods, Medical History Taking/standards, Physical Examination/standards, Cohort Studies, Humans, Reproducibility of Results, Surveys and Questionnaires, United States
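The composite score reliability analyses in the abstract above ask how different subcomponent weights change the reliability of the overall score. A common approach is Mosier's formula for the reliability of a weighted composite; the sketch below uses illustrative numbers, not the study's data:

```python
def composite_reliability(weights, sds, rels, corr):
    """Mosier reliability of a weighted sum of component scores.
    corr[i][j] is the observed correlation between components i and j
    (1.0 on the diagonal)."""
    k = len(weights)
    # Observed variance of the composite, from weights, SDs, and correlations.
    total_var = sum(weights[i] * weights[j] * sds[i] * sds[j] * corr[i][j]
                    for i in range(k) for j in range(k))
    # Error variance contributed by each component's unreliability.
    error_var = sum(weights[i] ** 2 * sds[i] ** 2 * (1 - rels[i])
                    for i in range(k))
    return 1 - error_var / total_var

# Hypothetical three-component exam: SP encounter, patient note, communication.
w = [0.3, 0.4, 0.3]        # component weights
sd = [8.0, 9.0, 6.0]       # component score SDs (percentage points)
rel = [0.55, 0.60, 0.70]   # component reliabilities
r = [[1.0, 0.40, 0.30],
     [0.40, 1.0, 0.35],
     [0.30, 0.35, 1.0]]
print(round(composite_reliability(w, sd, rel, r), 2))
```

Sweeping the weight vector over a grid is how one finds the weighting that maximizes composite reliability, as the study does for the patient note component.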
11.
Med Teach ; 38(11): 1100-1104, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27248314

ABSTRACT

The authors share 12 practical tips on creating effective titles and abstracts for a journal publication or conference presentation. When crafting a title authors should: (1) start thinking of the title from the start; (2) brainstorm many key words, create permutations, and ask others for input; (3) strive for an informative and indicative title; (4) start the title with the most important words; and (5) wait to finalize the title until the very end. When writing the abstract, authors should: (6) wait until the end to write the abstract; (7) copy and paste from main text as the starting point; (8) start with a detailed structured format; (9) describe what they did; (10) describe what they found; (11) highlight what readers can do with this information; and (12) ensure that the abstract aligns with the full text and conforms to submission guidelines.


Subjects
Journal Impact Factor, Periodicals as Topic/standards, Humans, Writing/standards
12.
Med Teach ; 38(7): 752-3, 2016 Jul.
Article in English | MEDLINE | ID: mdl-26806122
13.
Adv Health Sci Educ Theory Pract ; 21(4): 761-73, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26757931

ABSTRACT

Recent changes to the patient note (PN) format of the United States Medical Licensing Examination have challenged medical schools to improve the instruction and assessment of students taking the Step-2 clinical skills examination. The purpose of this study was to gather validity evidence regarding response process and internal structure, focusing on inter-rater reliability and generalizability, to determine whether a locally-developed PN scoring rubric and scoring guidelines could yield reproducible PN scores. A randomly selected subsample of historical data (post-encounter PNs from 55 of 177 medical students) was rescored by six trained faculty raters in November-December 2014. Inter-rater reliability (% exact agreement and kappa) was calculated for five standardized patient cases administered in a local graduation competency examination. Generalizability studies were conducted to examine the overall reliability. Qualitative data were collected through surveys and a rater-debriefing meeting. The overall inter-rater reliability (weighted kappa) was .79 (Documentation = .63, Differential Diagnosis = .90, Justification = .48, and Workup = .54). The majority of score variance was due to case specificity (13%) and case-task specificity (31%), indicating differences in student performance by case and by case-task interactions. Variance associated with raters and their interactions was modest (<5%). Raters felt that justification was the most difficult task to score and that having case- and level-specific scoring guidelines during training was most helpful for calibration. The overall inter-rater reliability indicates a high level of confidence in the consistency of note scores. Designs for scoring notes may optimize reliability by balancing the number of raters and cases.


Subjects
Clinical Competence/standards, Undergraduate Medical Education/standards, Educational Measurement/standards, Medical History Taking/standards, Physical Examination/standards, Differential Diagnosis, Documentation, Humans, Medical Licensure, Reproducibility of Results, United States
14.
Adv Health Sci Educ Theory Pract ; 21(4): 897-913, 2016 Oct.
Article in English | MEDLINE | ID: mdl-26590984

ABSTRACT

Despite multifaceted attempts to "protect the public," including the implementation of various assessment practices designed to identify individuals at all stages of training and practice who underperform, profound deficiencies in quality and safety continue to plague the healthcare system. The purpose of this reflections paper is to cast a critical lens on current assessment practices and to offer insights into ways in which they might be adapted to ensure alignment with modern conceptions of health professional education for the ultimate goal of improved healthcare. Three dominant themes will be addressed: (1) The need to redress unintended consequences of competency-based assessment; (2) The potential to design assessment systems that facilitate performance improvement; and (3) The importance of ensuring authentic linkage between assessment and practice. Several principles cut across each of these themes and represent the foundational goals we would put forward as signposts for decision making about the continued evolution of assessment practices in the health professions: (1) Increasing opportunities to promote learning rather than simply measuring performance; (2) Enabling integration across stages of training and practice; and (3) Reinforcing point-in-time assessments with continuous professional development in a way that enhances shared responsibility and accountability between practitioners, educational programs, and testing organizations. Many of the ideas generated represent suggestions for strategies to pilot test, for infrastructure to build, and for harmonization across groups to be enabled. These include novel strategies for OSCE station development, formative (diagnostic) assessment protocols tailored to shed light on the practices of individual clinicians, the use of continuous workplace-based assessment, and broadening the focus of high-stakes decision making beyond determining who passes and who fails. 
We conclude with reflections on systemic (i.e., cultural) barriers that may need to be overcome to move towards a more integrated, efficient, and effective system of assessment.


Subjects
Educational Measurement, Health Occupations, Competency-Based Education, Humans, Patient Safety, Quality Improvement
15.
Acad Med ; 90(11 Suppl): S56-62, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26505103

ABSTRACT

BACKGROUND: To determine the psychometric characteristics of diagnostic justification scores based on the patient note format of the United States Medical Licensing Examination Step 2 Clinical Skills exam, which requires students to document history and physical findings, differential diagnoses, diagnostic justification, and plan for immediate workup. METHOD: End-of-third-year medical students at one institution wrote notes for five standardized patient cases in May 2013 (n = 180) and 2014 (n = 177). Each case was scored using a four-point rubric to rate each of the four note components. Descriptive statistics and item analyses were computed, and a generalizability study was conducted. RESULTS: Across cases, 10% to 48% of students provided no diagnostic justification or had several missing or incorrect links between history and physical findings and diagnoses. The average intercase correlation for justification scores ranged from 0.06 to 0.16; internal consistency reliability of justification scores (coefficient alpha across cases) was 0.38. Overall, justification scores had the highest mean item discrimination across cases. The generalizability study showed that person-case interaction (12%) and task-case interaction (13%) had the largest variance components, indicating substantial case specificity. CONCLUSIONS: The diagnostic justification task provides unique information about student achievement and curricular gaps. Students struggled to correctly justify their diagnoses; performance was highly case specific. Diagnostic justification was the most discriminating element of the patient note and had the greatest variability in student performance across cases. The curriculum should provide a wide range of clinical cases and emphasize recognition and interpretation of clinically discriminating findings to promote the development of clinical reasoning skills.


Subjects
Clinical Competence, Clinical Decision-Making, Undergraduate Medical Education, Medical Licensure, Medical History Taking, Physical Examination, Curriculum, Female, Humans, Male, Patient Simulation, Psychometrics, Reproducibility of Results, United States
16.
Teach Learn Med ; 27(3): 299-306, 2015.
Article in English | MEDLINE | ID: mdl-26158332

ABSTRACT

THEORY: Feedback and debriefing, as portrayed in expertise development and self-assessment, play critical roles in providing residents with useful information to foster their progress. HYPOTHESES: Prior work has shown that clinical preceptors' use of conceptual frameworks (CFs; ways of thinking based on theories, best practices, or models) while giving feedback to residents was positively associated with a greater diversity of responses. Also, senior preceptors produced more responses, used more CFs, and asked more probing-challenging questions than junior preceptors. The purpose was to confirm the generalization of these initial findings with a broader and better defined sample of preceptors. METHOD: We conducted a mixed-method study with 20 junior and 20 senior preceptors in a controlled environment to analyze their responses and rationales to residents' educational needs as portrayed in 6 written vignettes. The preceptors were recruited from 3 primary care specialties (family medicine, internal medicine, pediatrics) at the 3 French-speaking faculties of medicine in Québec, Canada. RESULTS: The preceptors expanded the 2012 list of response topics (from 96 to 126) and doubled the number of distinct CFs (from 16 to 32). The junior and senior preceptors expressed the same number and diversity of CFs. On average, senior preceptors asked more clarification questions and reflected more than juniors on the learning process that occurs during case discussions. Preceptor specialty and prior training in medical education did not influence the number and diversity of responses and CFs, except that preceptors with prior training generated more responses per vignette and were more reflective. Senior preceptors had a stronger positive relationship between the number of total and distinct CFs and the number of responses than the juniors.
CONCLUSIONS: Although senior preceptors did not give more responses or use more CFs compared to the prior study, they continued to probe residents more and reflected more. The positive relationship between responses and CFs has important implications for faculty development and calls for more research to better understand the specific contribution of CFs to feedback.


Subjects
Internship and Residency, Needs Assessment, Preceptorship, Medical Students, Female, Humans, Interviews as Topic, Male, Qualitative Research
17.
Acad Med ; 90(11): 1541-6, 2015 Nov.
Article in English | MEDLINE | ID: mdl-25901877

ABSTRACT

PURPOSE: There is wide variability in how attending physician roles on teaching teams, including patient care and trainee learning, are enacted. This study sought to better understand variability by considering how different attendings configured and rationalized direct patient care, trainee oversight, and teaching activities. METHOD: Constructivist grounded theory guided iterative data collection and analyses. Data were interviews with 24 attending physicians from two academic centers in Ontario, Canada, in 2012. During interviews, participants heard a hypothetical presentation and reflected on it as though it were presented to their team during a typical admission case review. RESULTS: Four supervisory styles were identified: direct care, empowerment, mixed practice, and minimalist. Driven by concerns for patient safety, direct care involves delegating minimal patient care responsibility to trainees. Focused on supporting trainees' progressive independence, empowerment uses teaching and oversight strategies to ensure quality of care. In mixed practice, patient care is privileged over teaching and is adjusted on the basis of trainee competence and contextual features such as patient volume. Minimalist style involves a high degree of trust in senior residents, delegating most patient care, and teaching to them. Attendings rarely discussed their styles with the team. CONCLUSIONS: The model adds to the literature on variability in supervisory practice, showing that the four styles reflect different ways of responding to tensions in the role and context. This model could be refined through observational research exploring the impact of context on style development and enactment. Making supervisory styles explicit could support improvement of team competence.


Subjects
Internal Medicine/education, Hospital Medical Staff, Educational Models, Physician's Role, Clinical Competence, Humans, Interprofessional Relations, Interviews as Topic, Ontario, Patient Care Team, Teaching
18.
Med Teach ; 37(4): 379-84, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25156235

ABSTRACT

BACKGROUND: SNAPPS is a learner-centered approach to case presentations that was shown, in American studies, to facilitate the expression of clinical reasoning and uncertainties in the outpatient setting. AIM: To evaluate the SNAPPS technique in an Asian setting. METHODS: We conducted a quasi-experimental trial comparing the SNAPPS technique to the usual-and-customary method of case presentations for fifth-year medical students in an ambulatory internal medicine clerkship rotation at Khon Kaen University, Thailand. We created four experimental groups to test main and maturation effects. We measured 12 outcomes at the end of the rotations: total, summary, and discussion presentation times, number of basic clinical findings, summary thoroughness, number of diagnoses in the differential, number of justified diagnoses, number of basic attributes supporting the differential, number of student-initiated questions or discussions about uncertainties, diagnosis, management, and reading selections. RESULTS: SNAPPS users (90 case presentations), compared with the usual group (93 presentations), had more diagnoses in their differentials (1.81 vs. 1.42), more basic attributes to support the differential (2.39 vs. 1.22), more expression of uncertainties (6.67% vs. 1.08%), and more student-initiated reading selections (6.67% vs. 0%). Presentation times did not differ between groups (12 vs. 11.2 min). There were no maturation effects detected. CONCLUSIONS: The use of the SNAPPS technique among Thai medical students during their internal medicine ambulatory care clerkship rotation did facilitate the expression of their clinical reasoning and uncertainties. More intense student-preceptor training is needed to better foster the expression of uncertainties.


Subjects
Ambulatory Care, Clinical Clerkship/organization & administration, Clinical Competence, Internal Medicine/education, Educational Models, Humans, Learning, Thailand, Time Factors, Uncertainty
19.
Adv Health Sci Educ Theory Pract ; 20(1): 85-100, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24823793

ABSTRACT

Internists are required to perform a number of procedures that require mastery of technical and non-technical skills, however, formal assessment of these skills is often lacking. The purpose of this study was to develop, implement, and gather validity evidence for a procedural skills objective structured clinical examination (PS-OSCE) for internal medicine (IM) residents to assess their technical and non-technical skills when performing procedures. Thirty-five first to third-year IM residents participated in a 5-station PS-OSCE, which combined partial task models, standardized patients, and allied health professionals. Formal blueprinting was performed and content experts were used to develop the cases and rating instruments. Examiners underwent a frame-of-reference training session to prepare them for their rater role. Scores were compared by levels of training, experience, and to evaluation data from a non-procedural OSCE (IM-OSCE). Reliability was calculated using Generalizability analyses. Reliabilities for the technical and non-technical scores were 0.68 and 0.76, respectively. Third-year residents scored significantly higher than first-year residents on the technical (73.5 vs. 62.2%) and non-technical (83.2 vs. 75.1%) components of the PS-OSCE (p < 0.05). Residents who had performed the procedures more frequently scored higher on three of the five stations (p < 0.05). There was a moderate disattenuated correlation (r = 0.77) between the IM-OSCE and the technical component of the PS-OSCE scores. The PS-OSCE is a feasible method for assessing multiple competencies related to performing procedures and this study provides validity evidence to support its use as an in-training examination.


Subjects
Clinical Competence, Graduate Medical Education/standards, Educational Measurement/methods, Internal Medicine/education, Internship and Residency, Adult, Female, Humans, Male, Educational Models, Ontario, Reproducibility of Results
20.
Med Educ ; 48(10): 1020-7, 2014 Oct.
Article in English | MEDLINE | ID: mdl-25200022

ABSTRACT

OBJECTIVES: Despite significant evidence supporting the use of three-option multiple-choice questions (MCQs), these are rarely used in written examinations for health professions students. The purpose of this study was to examine the effects of reducing four- and five-option MCQs to three-option MCQs on response times, psychometric characteristics, and absolute standard setting judgements in a pharmacology examination administered to health professions students. METHODS: We administered two versions of a computerised examination containing 98 MCQs to 38 Year 2 medical students and 39 Year 3 pharmacy students. Four- and five-option MCQs were converted into three-option MCQs to create two versions of the examination. Differences in response time, item difficulty and discrimination, and reliability were evaluated. Medical and pharmacy faculty judges provided three-level Angoff (TLA) ratings for all MCQs for both versions of the examination to allow the assessment of differences in cut scores. RESULTS: Students answered three-option MCQs an average of 5 seconds faster than they answered four- and five-option MCQs (36 seconds versus 41 seconds; p = 0.008). There were no significant differences in item difficulty and discrimination, or test reliability. Overall, the cut scores generated for three-option MCQs using the TLA ratings were 8 percentage points higher (p = 0.04). CONCLUSIONS: The use of three-option MCQs in a health professions examination resulted in a time saving equivalent to the completion of 16% more MCQs per 1-hour testing period, which may increase content validity and test score reliability, and minimise construct under-representation. The higher cut scores may result in higher failure rates if an absolute standard setting method, such as the TLA method, is used. 
The results from this study provide a cautious indication to health professions educators that using three-option MCQs does not threaten validity and may strengthen it by allowing additional MCQs to be tested in a fixed amount of testing time with no deleterious effect on the reliability of the test scores.


Subjects
Undergraduate Medical Education/methods, Pharmacy Education/methods, Educational Measurement/methods, Surveys and Questionnaires/standards, Adult, California, Female, Humans, Male, Psychometrics, Reaction Time, Reproducibility of Results, Young Adult
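The three-level Angoff (TLA) ratings described in the record above have each judge sort every MCQ into one of three difficulty levels for the borderline examinee, with each level mapped to a probability of a correct response. A sketch of how such ratings aggregate into a cut score; the level-to-probability anchors here are illustrative assumptions, not the values the study used:

```python
# Illustrative anchors: assumed probability that a borderline examinee
# answers an item at each difficulty level correctly (hypothetical values).
LEVEL_PROB = {"easy": 0.9, "moderate": 0.6, "hard": 0.3}

def tla_cut_score(ratings):
    """ratings[judge] is a list of per-item difficulty levels.
    Returns the cut score as a percentage of items correct."""
    per_judge = [sum(LEVEL_PROB[level] for level in items) / len(items)
                 for items in ratings]
    # Average the per-judge expected scores to get the panel's cut score.
    return 100 * sum(per_judge) / len(per_judge)

ratings = [
    ["easy", "moderate", "moderate", "hard"],   # judge 1
    ["easy", "easy", "moderate", "moderate"],   # judge 2
]
print(round(tla_cut_score(ratings), 1))
```

Because every item contributes its level's probability, removing implausible distractors (as in three-option MCQs) tends to shift judges toward easier level assignments, which is consistent with the higher cut scores the study reports.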